1.
Sci Rep; 14(1): 2469, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38291126

ABSTRACT

Sound localization is essential to perceive the surrounding world and to interact with objects. This ability can be learned across time, and multisensory and motor cues play a crucial role in the learning process. A recent study demonstrated that, when training localization skills, reaching to the sound source to determine its position reduced localization errors faster and to a greater extent than just naming the sources' positions, even though in both tasks participants received the same feedback about the correct position of sound sources in case of a wrong response. However, it remains to be established which features made reaching to sounds more effective than naming. In the present study, we introduced a further condition in which the hand is the effector providing the response, but without reaching toward the space occupied by the target source: the pointing condition. We tested three groups of participants (naming, pointing, and reaching groups), each performing a sound localization task in normal and altered listening situations (i.e., mild-moderate unilateral hearing loss) simulated through auditory virtual reality technology. The experiment comprised four blocks: during the first and the last block, participants were tested in the normal listening condition, while during the second and the third they were tested in the altered listening condition. We measured their performance, their subjective judgments (e.g., effort), and their head-related behavior (through kinematic tracking). First, participants' performance decreased when exposed to asymmetrical mild-moderate hearing impairment, most markedly on the ipsilateral side and for the pointing group. Second, all groups decreased their localization errors across the altered listening blocks, but this reduction was larger for the reaching and pointing groups than for the naming group. Crucially, the reaching group showed the greatest error reduction on the side where the listening alteration was applied. Furthermore, across blocks, the reaching and pointing groups increased their head motor behavior during the task (i.e., they increased approaching head movements toward the space of the sound) more than the naming group. Third, while performance in the unaltered blocks (first and last) was comparable, only the reaching group continued to exhibit head behavior similar to that developed during the altered blocks (second and third), corroborating the previously observed relationship between reaching to sounds and head movements. In conclusion, this study further demonstrates the effectiveness of reaching to sounds, compared to pointing and naming, in the learning process. This effect could relate both to the implementation of goal-directed motor actions and to the role of reaching actions in fostering head-related motor strategies.
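
As context for how a mild-moderate unilateral listening alteration can be rendered in software, here is a minimal sketch in Python/NumPy. It applies a flat attenuation to one channel of a binaural signal; the 30 dB value, the function name, and the flat (frequency-independent) attenuation are illustrative assumptions, not the study's actual auditory virtual reality pipeline.

    import numpy as np

    def simulate_unilateral_loss(binaural, attenuation_db=30.0, impaired_ear=0):
        # binaural: array of shape (n_samples, 2); impaired_ear: 0 = left, 1 = right.
        # A flat attenuation is a crude stand-in: real simulations shape the
        # loss by frequency band to match a mild-moderate audiogram.
        out = binaural.astype(float).copy()
        gain = 10.0 ** (-attenuation_db / 20.0)  # dB attenuation -> linear gain
        out[:, impaired_ear] *= gain
        return out

    # Example: 1 s of binaural noise at 44.1 kHz with the left ear attenuated.
    stereo = np.random.randn(44100, 2)
    altered = simulate_unilateral_loss(stereo, attenuation_db=30.0, impaired_ear=0)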


Subjects
Hearing Loss, Sound Localization, Virtual Reality, Humans, Hearing/physiology, Sound Localization/physiology, Hearing Tests
2.
Cogn Res Princ Implic; 9(1): 4, 2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38191869

ABSTRACT

Localizing sounds in noisy environments can be challenging. Here, we reproduced real-life soundscapes to investigate the effects of environmental noise on the sound localization experience. We evaluated participants' performance and metacognitive assessments, including measures of sound localization effort and confidence, while also tracking their spontaneous head movements. Normal-hearing participants (N = 30) were engaged in a speech-localization task conducted in three common soundscapes that progressively increased in complexity: nature, traffic, and a cocktail-party setting. To control visual information and measure behaviors, we used visual virtual reality technology. The results revealed that the complexity of the soundscape affected both performance errors and metacognitive evaluations. Participants reported increased effort and reduced confidence for sound localization in more complex noise environments. By contrast, the level of soundscape complexity did not influence the use of spontaneous exploratory head-related behaviors. We also observed that, irrespective of the noise condition, participants who made more head rotations and explored a wider extent of space by rotating their heads made smaller localization errors. Interestingly, we found preliminary evidence that an increase in spontaneous head movements, specifically the extent of head rotation, leads to a decrease in perceived effort and an increase in confidence at the single-trial level. These findings expand previous observations regarding sound localization in noisy environments by broadening the perspective to include metacognitive evaluations, exploratory behaviors, and their interactions.
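
To illustrate the kind of head-movement metrics discussed here (number of head rotations and extent of explored space), the following sketch derives both from a head-yaw trace; the sampling rate, velocity threshold, and function name are assumptions made for the example, not the authors' analysis code.

    import numpy as np

    def head_rotation_metrics(yaw_deg, velocity_threshold=5.0, dt=1.0 / 90.0):
        # yaw_deg: 1-D array of head yaw samples in degrees (e.g., 90 Hz tracking).
        yaw = np.asarray(yaw_deg, dtype=float)
        speed = np.abs(np.diff(yaw)) / dt                 # angular speed, deg/s
        moving = speed > velocity_threshold
        # Each rising edge of the 'moving' mask marks the start of one rotation.
        n_rotations = int(np.sum(np.diff(moving.astype(int)) == 1) + moving[0])
        extent = float(yaw.max() - yaw.min())             # explored span, deg
        return n_rotations, extent

    # Example: still, then a single 40-degree sweep, then still again.
    trace = np.concatenate([np.zeros(90), np.linspace(0.0, 40.0, 90), np.full(90, 40.0)])
    print(head_rotation_metrics(trace))                   # -> (1, 40.0)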


Subjects
Head Movements, Sound Localization, Humans, Sound, Exploratory Behavior, Mental Processes
3.
Article in English | MEDLINE | ID: mdl-37971362

ABSTRACT

Metacognition entails knowledge of one's own cognitive skills, perceived self-efficacy and locus of control when performing a task, and performance monitoring. Age-related changes in metacognition have been observed in metamemory, whereas their occurrence for hearing remained unknown. We tested 30 older and 30 younger adults with typical hearing to assess whether age reduces metacognition for hearing sentences in noise. Metacognitive monitoring was overall comparable between older and younger adults; in fact, the older group achieved better monitoring for words in the second part of the phrase. Additionally, only older adults showed a correlation between performance and perceived confidence. No age difference was found for locus of control, knowledge, or self-efficacy. This suggests intact metacognitive skills for hearing in noise in older adults, alongside a somewhat paradoxical overconfidence in younger adults. These findings support exploiting metacognition for older adults dealing with noisy environments, since metacognition is central for implementing self-regulation strategies.

4.
Trends Hear; 27: 23312165231182289, 2023.
Article in English | MEDLINE | ID: mdl-37611181

ABSTRACT

Lateralized sounds can orient visual attention, with benefits for audio-visual processing. Here, we asked to what extent perturbed auditory spatial cues, resulting from cochlear implants (CI) or unilateral hearing loss (uHL), allow this automatic mechanism of information selection from the audio-visual environment. We used a classic paradigm from experimental psychology (capture of visual attention with sounds) to probe the integrity of audio-visual attentional orienting in 60 adults with hearing loss: bilateral CI users (N = 20), unilateral CI users (N = 20), and individuals with uHL (N = 20). For comparison, we also included a group of normal-hearing participants (NH, N = 20), tested in binaural and monaural listening conditions (i.e., with one ear plugged). All participants also completed a sound localization task to assess spatial hearing skills. Comparable audio-visual orienting was observed in bilateral CI, uHL, and binaural NH participants. By contrast, audio-visual orienting was, on average, absent in unilateral CI users and reduced in NH participants listening with one ear plugged. Spatial hearing skills were better in bilateral CI, uHL, and binaural NH participants than in unilateral CI users and monaurally plugged NH listeners. In unilateral CI users, spatial hearing skills correlated with audio-visual orienting abilities. These novel results show that audio-visual attentional orienting can be preserved in bilateral CI users and uHL patients to a greater extent than in unilateral CI users. This highlights the importance of assessing the impact of hearing loss beyond auditory difficulties alone, to capture to what extent it may enable or impede typical interactions with the multisensory environment.


Subjects
Cochlear Implantation, Cochlear Implants, Deafness, Unilateral Hearing Loss, Hearing Loss, Sound Localization, Speech Perception, Adult, Humans, Cues (Psychology), Hearing, Cochlear Implantation/methods
5.
J Clin Med; 12(6), 2023 Mar 17.
Article in English | MEDLINE | ID: mdl-36983357

ABSTRACT

Unilateral hearing loss (UHL) leads to an alteration of binaural cues, resulting in a significant increase of spatial errors in the horizontal plane. In this study, nineteen patients with UHL were recruited and randomized in a cross-over design into two groups: a first group (n = 9) received spatial audiovisual training in the first session and non-spatial audiovisual training in the second session (2 to 4 weeks after the first); a second group (n = 10) received the same trainings in the opposite order (non-spatial, then spatial). A sound localization test using head-pointing (LOCATEST) was completed before and after each training session. The results showed a significant decrease in head-pointing localization errors after spatial training for group 1 (24.85° ± 15.8° vs. 16.17° ± 11.28°; p < 0.001). The number of head movements during the spatial training for the 19 participants did not change (p = 0.79); nonetheless, hand-pointing errors and reaction times significantly decreased by the end of the spatial training (p < 0.001). This study suggests that audiovisual spatial training can improve sound localization and induce spatial adaptation to a monaural deficit through the optimization of effective head movements. Virtual reality systems are relevant tools that can be used in clinics to develop training programs for patients with hearing impairments.

6.
Eur Arch Otorhinolaryngol; 280(8): 3661-3672, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36905419

ABSTRACT

BACKGROUND AND PURPOSE: Use of a unilateral cochlear implant (UCI) is associated with limited spatial hearing skills, and evidence that training these abilities in UCI users is possible remains limited. In this study, we assessed whether a spatial training based on hand-reaching to sounds performed in virtual reality improves spatial hearing abilities in UCI users. METHODS: Using a crossover randomized clinical trial, we compared the effects of a spatial training protocol with those of a non-spatial control training. We tested 17 UCI users in a head-pointing to sound task and in an audio-visual attention orienting task, before and after each training. The study is registered at clinicaltrials.gov (NCT04183348). RESULTS: During the spatial virtual reality training, sound localization errors in azimuth decreased. Moreover, when comparing head-pointing to sounds before vs. after training, localization errors decreased more after the spatial than after the control training. No training effects emerged in the audio-visual attention orienting task. CONCLUSIONS: Our results showed that sound localization in UCI users improves during a spatial training, with benefits that extend to a non-trained sound localization task (generalization). These findings have potential for novel rehabilitation procedures in clinical contexts.


Subjects
Cochlear Implantation, Cochlear Implants, Sound Localization, Speech Perception, Humans, Hearing, Cochlear Implantation/methods, Hearing Tests/methods
7.
Cognition; 234: 105355, 2023 May.
Article in English | MEDLINE | ID: mdl-36791607

ABSTRACT

Bayesianism assumes that probabilistic updating does not depend on the sensory modality through which information is processed. In this study, we investigated whether probability judgments based on visual and auditory information conform to this assumption. In a series of five experiments, we found that this is indeed the case when information is acquired through a single modality (i.e., only auditory or only visual) but not necessarily so when it comes from multiple modalities (i.e., audio-visual). In the latter case, judgments prove more accurate when the visual and auditory information each individually support (i.e., increase the probability of) the hypothesis they also jointly support (synergy condition) than when either the visual or the auditory information supports a hypothesis that is not the one they jointly support (contrast condition). In the extreme case in which both the visual and the auditory information individually support an alternative hypothesis to the one they jointly support (double-contrast condition), participants' accuracy is not only lower than in the synergy condition but near chance. This synergy-contrast effect represents a violation of the assumption that information modality is irrelevant for Bayesian updating and indicates an important, previously undocumented limitation of multisensory integration.
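
For reference, the Bayesian benchmark at stake can be written as posterior odds = prior odds x LR_visual x LR_auditory, where each LR is the likelihood ratio contributed by one modality, assuming the cues are conditionally independent given the hypothesis. A minimal numeric sketch with made-up likelihoods (not the paper's stimuli):

    def posterior_h1(prior_h1, p_visual_given, p_audio_given):
        # p_visual_given / p_audio_given: (P(cue | H1), P(cue | H2)) pairs.
        pv1, pv2 = p_visual_given
        pa1, pa2 = p_audio_given
        joint_h1 = prior_h1 * pv1 * pa1
        joint_h2 = (1.0 - prior_h1) * pv2 * pa2
        return joint_h1 / (joint_h1 + joint_h2)

    # Both cues individually favour H1, and so does their combination.
    print(posterior_h1(0.5, (0.8, 0.2), (0.7, 0.3)))  # -> 0.903

For a normative updater the result is identical regardless of which modality carries which likelihood; the synergy-contrast effect reported here is a deviation from that benchmark.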


Subjects
Auditory Perception, Visual Perception, Humans, Bayes Theorem, Problem Solving, Judgment, Acoustic Stimulation, Photic Stimulation
8.
Front Hum Neurosci; 17: 1108354, 2023.
Article in English | MEDLINE | ID: mdl-36816496

ABSTRACT

In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.

9.
Conscious Cogn; 109: 103490, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36842317

ABSTRACT

In spoken languages, face masks represent an obstacle to speech understanding and influence metacognitive judgments, reducing confidence and increasing effort while listening. To date, all studies on face masks and communication have involved spoken languages and hearing participants, leaving us with no insight into how masked communication impacts non-spoken languages. Here, we examined the effects of face masks on sign language comprehension and metacognition. In an online experiment, deaf participants (N = 60) watched three parts of a story signed without a mask, with a transparent mask, or with an opaque mask, and answered questions about the story content, as well as their perceived effort, feeling of understanding, and confidence in their answers. Results showed that feeling of understanding and perceived effort worsened as the visual condition changed from no mask to transparent or opaque masks, while comprehension of the story did not differ significantly across visual conditions. We propose that these metacognitive effects are due to the reduction of pragmatic, linguistic, and para-linguistic cues from the lower face, hidden by the mask. This reduction could affect the perception of lower-face linguistic components, attitude attribution, and the classification of emotions and prosody in a conversation, driving the observed effects on metacognitive judgments while leaving sign language comprehension substantially unchanged, albeit achieved with higher effort. These results represent a novel step toward better understanding what drives the metacognitive effects of face masks during face-to-face communication and highlight the importance of including the metacognitive dimension in human communication research.


Subjects
Metacognition, Humans, Comprehension, Masks, Speech, Auditory Perception
10.
Int J Pediatr Otorhinolaryngol; 165: 111421, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36669271

ABSTRACT

BACKGROUND: Language and communication outcomes in children with congenital sensorineural hearing loss (cSNHL) are highly variable, and some of this variance can be attributed to the quantity and quality of language input. In this paper, building on the evidence that human language is inherently multimodal and that positive scaffolding of children's linguistic, cognitive, and social-relational development can be supported by Parent Centered Early Interventions (PCEI), we suggest that the use of gestures in these interventions could be a beneficial, yet scarcely explored, approach. AIMS AND METHODS: This systematic review examined the literature on PCEI focused on gestures (symbolic and deictic) used to enhance the caregiver-child relationship and the infant's language development, in both typically and atypically developing populations. The review was conducted following the PRISMA guidelines for systematic reviews and meta-analyses. Of 246 identified studies, 8 met the PICO inclusion criteria and were eligible for inclusion. Two reviewers screened papers before completing data extraction and risk-of-bias assessment using the RoB2 Cochrane scale. RESULTS: The included studies measured the effect of implementing symbolic or deictic gestures in daily communication on relational aspects of the mother/parent-child interaction or on infants' language skills. They indicate that gesture-oriented PCEI may benefit deprived populations such as atypically developing children, children from low-income families, and children who, for individual reasons, lag behind their peers in communication. CONCLUSIONS: Although gesture-oriented PCEI appear beneficial as early intervention for atypically developing populations, this approach has so far scarcely been explored in the context of hearing loss. Yet, since symbolic gestures are a natural part of early vocabulary acquisition that emerges spontaneously regardless of hearing status, this approach could represent a promising line of intervention in infants with cSNHL, especially those with a worse head start.


Subjects
Deafness, Gestures, Humans, Infant, Communication, Language, Language Development, Parent-Child Relations
11.
Article in English | MEDLINE | ID: mdl-35531868

ABSTRACT

Recent work suggests that age-related hearing loss (HL) is a possible risk factor for cognitive decline in older adults. The resulting poor speech recognition negatively impacts cognitive, social, and emotional functioning and may relate to dementia. However, little is known about the consequences of hearing loss for other, non-linguistic domains of cognition. The aim of this study was to investigate the role of HL in covert orienting of attention, selective attention, and executive control. We compared older adults with and without mild-to-moderate hearing loss (26-60 dB) performing (1) a spatial cueing task with uninformative central cues (social vs. nonsocial cues), (2) a flanker task, and (3) a neuropsychological assessment of attention. Overall response times and flanker interference effects were comparable across groups. However, in spatial cueing of attention using social and nonsocial cues, hearing-impaired individuals showed reduced validity effects, though no additional group differences were found between social and nonsocial cues. Hearing-impaired individuals also demonstrated diminished performance on the Montreal Cognitive Assessment (MoCA) and on tasks requiring divided attention and flexibility. This work indicates that while response speed and response inhibition appear to be preserved following mild-to-moderate acquired hearing loss, orienting of attention, divided attention, and the ability to flexibly allocate attentional resources deteriorate more in older adults with HL. This suggests that hearing loss might exacerbate the detrimental influence of aging on visual attention.


Subjects
Cues (Psychology), Hearing Loss, Humans, Aged, Cognition, Reaction Time/physiology, Aging/physiology
12.
Ear Hear; 44(1): 189-198, 2023.
Article in English | MEDLINE | ID: mdl-35982520

ABSTRACT

OBJECTIVES: We assessed whether spatial hearing training improves sound localization in bilateral cochlear implant (BCI) users and whether its benefits can generalize to untrained sound localization tasks. DESIGN: In 20 BCI users, we assessed the effects of two training procedures (spatial versus nonspatial control training) on two different tasks performed before and after training (head-pointing to sound and audiovisual attention orienting). In the spatial training, participants identified sound position by reaching toward the sound sources with their hand. In the nonspatial training, comparable reaching movements served to identify sound amplitude modulations. A crossover randomized design allowed comparison of training procedures within the same participants. Spontaneous head movements while listening to the sounds were allowed and tracked to correlate them with localization performance. RESULTS: During spatial training, BCI users reduced their sound localization errors in azimuth and adapted their spontaneous head movements as a function of sound eccentricity. These effects generalized to the head-pointing sound localization task, as revealed by a greater reduction of sound localization error in azimuth and a more accurate first head-orienting response, as compared to the control nonspatial training. BCI users benefited from auditory spatial cues for orienting visual attention, but the spatial training did not enhance this multisensory attention ability. CONCLUSIONS: Sound localization in BCI users improves with spatial reaching-to-sound training, with benefits to a nontrained sound localization task. These findings pave the way to novel rehabilitation procedures in clinical contexts.
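
A side note on the azimuth error measure: because azimuth is an angle, response-target differences should be wrapped to [-180, 180] degrees before averaging, otherwise a response at +170 degrees to a target at -170 degrees would count as 340 degrees of error instead of the true 20. A minimal sketch (illustrative, not the authors' analysis code):

    import numpy as np

    def azimuth_error_deg(response_az, target_az):
        # Absolute angular error in degrees, wrapped to [-180, 180].
        diff = (np.asarray(response_az) - np.asarray(target_az) + 180.0) % 360.0 - 180.0
        return np.abs(diff)

    responses = np.array([10.0, -30.0, 170.0])
    targets = np.array([0.0, -45.0, -170.0])
    print(azimuth_error_deg(responses, targets))         # [10. 15. 20.]
    print(azimuth_error_deg(responses, targets).mean())  # mean error in azimuth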


Subjects
Cochlear Implantation, Cochlear Implants, Sound Localization, Humans, Auditory Perception/physiology, Cochlear Implantation/methods, Hearing/physiology, Hearing Tests/methods, Sound Localization/physiology, Cross-Over Studies
13.
Sci Rep; 12(1): 19036, 2022 Nov 09.
Article in English | MEDLINE | ID: mdl-36351944

ABSTRACT

It is evident that the brain is capable of large-scale reorganization following sensory deprivation, but the extent of such reorganization is, to date, not clear. The auditory modality is the most accurate for representing temporal information, and deafness is an ideal clinical condition in which to study the reorganization of temporal representation when the audio signal is not available. Here we show that hearing, but not deaf, individuals exhibit a strong ERP response to visual stimuli in temporal areas during a time-bisection task. This ERP response appears 50-90 ms after the flash and recalls some aspects of the N1 ERP component usually elicited by auditory stimuli. The same ERP is not evident for a visual space-bisection task, suggesting that the early recruitment of temporal cortex is specific to building a highly resolved temporal representation within the visual modality. These findings provide evidence that the lack of auditory input can interfere with the typical development of complex visual temporal representations.


Subjects
Auditory Cortex, Deafness, Humans, Photic Stimulation, Magnetic Resonance Imaging, Hearing, Brain Mapping, Auditory Cortex/physiology
14.
Front Hum Neurosci; 16: 1026056, 2022.
Article in English | MEDLINE | ID: mdl-36310849

ABSTRACT

Moving the head while a sound is playing improves its localization in human listeners, in children and adults, with or without hearing problems. It remains to be ascertained whether this benefit also extends to aging adults with hearing loss, a population in which spatial hearing difficulties are often documented and intervention solutions are scant. Here we examined the performance of elderly adults (61-82 years old) with symmetrical or asymmetrical age-related hearing loss while they localized sounds with their head fixed or free to move. Using motion tracking in combination with free-field sound delivery in visual virtual reality, we tested participants in two auditory spatial tasks: front-back discrimination and 3D sound localization in front space. Front-back discrimination was easier for participants with symmetrical compared to asymmetrical hearing loss, yet both groups reduced their front-back errors when head movements were allowed. In 3D sound localization, free head movements reduced errors in the horizontal dimension and in a composite measure that computed errors in 3D space. Errors in 3D space improved for participants with asymmetrical hearing impairment when the head was free to move. These preliminary findings extend to aging adults with hearing loss the literature on the advantage of head movements for sound localization, and suggest that the disparity of auditory cues at the two ears can modulate this benefit. These results point to the possibility of taking advantage of self-regulation strategies and active behavior when promoting spatial hearing skills.

15.
Handb Clin Neurol; 187: 89-108, 2022.
Article in English | MEDLINE | ID: mdl-35964994

ABSTRACT

The auditory cortex of people with sensorineural hearing loss can be re-afferented using a cochlear implant (CI): a neural prosthesis that bypasses the damaged cells in the cochlea to directly stimulate the auditory nerve. Although CIs are the most successful neural prosthesis to date, some CI users still do not achieve satisfactory outcomes using these devices. To explain variability in outcomes, clinicians and researchers have increasingly focused their attention on neuroscientific investigations that examined how the auditory cortices respond to the electric signals that originate from the CI. This chapter provides an overview of the literature that examined how the auditory cortex changes its functional properties in response to inputs from the CI, in animal models and in humans. We focus first on the basic responses to sounds delivered through electrical hearing and, next, we examine the integrity of two fundamental aspects of the auditory system: tonotopy and processing of binaural cues. When addressing the effects of CIs in humans, we also consider speech-evoked responses. We conclude by discussing to what extent this neuroscientific literature can contribute to clinical practices and help to overcome variability in outcomes.


Subjects
Auditory Cortex, Cochlear Implantation, Cochlear Implants, Speech Perception, Animals, Humans, Neuronal Plasticity, Speech Perception/physiology
16.
Iperception; 13(2): 20416695221092471, 2022.
Article in English | MEDLINE | ID: mdl-35463914

ABSTRACT

In Western music and in the music of other cultures, minor chords, modes, and intervals evoke sadness. It has been proposed that this emotional interpretation of melodic intervals (the distance between two pitches, expressed in semitones) is common to music and vocal expressions. Here, we asked expert musicians to transcribe spontaneous vocalizations of pre-verbal infants into music scores, to test the hypothesis that melodic intervals that evoke sadness in music (i.e., the minor 2nd) are more represented in cry than in neutral utterances. Results showed that the unison, major 2nd, minor 2nd, major 3rd, minor 3rd, perfect 4th, and perfect 5th are all represented in infant vocalizations. However, the minor 2nd outnumbered all other intervals in cry vocalizations, but not in neutral babbling. These findings suggest that the association between minor intervals and sadness may develop in humans because a critically relevant social cue (infant cry) contains a statistical regularity: the association between the minor 2nd and negative emotional valence.
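
The melodic interval between two pitches, expressed in semitones, is 12 * log2(f2 / f1). A minimal sketch of classifying a pitch pair against the interval categories listed above (the function names and label table are illustrative, not the transcription procedure used in the study):

    import math

    INTERVAL_NAMES = {0: "unison", 1: "minor 2nd", 2: "major 2nd", 3: "minor 3rd",
                      4: "major 3rd", 5: "perfect 4th", 7: "perfect 5th"}

    def interval_in_semitones(f1_hz, f2_hz):
        # Signed melodic interval between two frequencies, in semitones.
        return 12.0 * math.log2(f2_hz / f1_hz)

    def classify_interval(f1_hz, f2_hz):
        semitones = round(abs(interval_in_semitones(f1_hz, f2_hz)))
        return INTERVAL_NAMES.get(semitones, f"{semitones} semitones")

    # One equal-tempered semitone above A4 (440 Hz) is a minor 2nd.
    print(classify_interval(440.0, 440.0 * 2 ** (1.0 / 12.0)))  # -> minor 2nd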

17.
PLoS One; 17(4): e0263509, 2022.
Article in English | MEDLINE | ID: mdl-35421095

ABSTRACT

Localizing sounds means having the ability to process auditory cues deriving from the interplay among sound waves, the head, and the ears. When auditory cues change because of temporary or permanent hearing loss, sound localization becomes difficult and uncertain. The brain can adapt to altered auditory cues throughout life, and multisensory training can promote the relearning of spatial hearing skills. Here, we studied the training potential of sound-oriented motor behaviour, to test whether a training based on manual actions toward sounds can induce learning effects that generalize to different auditory spatial tasks. We assessed spatial hearing relearning in normal-hearing adults with a plugged ear, using visual virtual reality and body motion tracking. Participants performed two auditory tasks that entail explicit and implicit processing of sound position (head-pointing sound localization and audio-visual attention cueing, respectively), before and after receiving a spatial training session in which they identified sound position by reaching to auditory sources nearby. Using a crossover design, the effects of this spatial training were compared to a control condition involving the same physical stimuli but different task demands (i.e., a non-spatial discrimination of amplitude modulations in the sound). According to our findings, spatial hearing in one-ear-plugged participants improved more after the reaching-to-sound training than in the control condition. Training by reaching also modified head-movement behaviour during listening. Crucially, the improvements observed during training generalized to a different sound localization task, possibly as a consequence of newly acquired head-movement strategies.


Subjects
Cues (Psychology), Sound Localization, Acoustic Stimulation, Physiological Adaptation, Adult, Auditory Perception, Cross-Over Studies, Hearing, Humans
18.
Exp Brain Res; 240(3): 813-824, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35048159

ABSTRACT

In noisy contexts, sound discrimination improves when the auditory sources are separated in space. This phenomenon, named Spatial Release from Masking (SRM), arises from the interaction between the auditory information reaching the ears and spatial attention resources. To examine the relative contribution of these two factors, we exploited an audio-visual illusion in a hearing-in-noise task to create conditions in which the initial stimulation to the ears is held constant, while the perceived separation between speech and masker is changed illusorily (visual capture of sound). In two experiments, we asked participants to identify a string of five digits pronounced by a female voice, embedded in either energetic (Experiment 1) or informational (Experiment 2) noise, before reporting the perceived location of the heard digits. Critically, the distance between target digits and masking noise was manipulated both physically (from 22.5 to 75.0 degrees) and illusorily, by pairing target sounds with visual stimuli either at the same position (audio-visual congruent) or at different positions (15 degrees offset, leftward or rightward: audio-visual incongruent). The proportion of correctly reported digits increased with the physical separation between target and masker, as expected from SRM. However, despite effective visual capture of sounds, performance was not modulated by illusory changes of the target sound position. Our results are compatible with a limited role of central factors in the SRM phenomenon, at least in our experimental setting. Moreover, they add to the controversial literature on the limited effects of audio-visual capture in auditory stream separation.
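
For readers unfamiliar with the measure, SRM is conventionally quantified as the intelligibility gain (or the drop in speech reception threshold) when target and masker are spatially separated relative to a less separated baseline. A minimal sketch over proportion-correct scores, with illustrative numbers:

    def spatial_release_from_masking(correct_separated, correct_baseline):
        # SRM as the gain in the proportion of correctly reported digits
        # when the masker is moved away from the target.
        return correct_separated - correct_baseline

    # Illustrative values: 0.85 correct at 75.0 deg separation vs. 0.60 at 22.5 deg.
    print(spatial_release_from_masking(0.85, 0.60))  # -> 0.25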


Subjects
Perceptual Masking, Speech Perception, Acoustic Stimulation, Female, Hearing, Humans, Noise, Speech
19.
Int J Audiol; 61(7): 561-573, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34634214

ABSTRACT

OBJECTIVE: The aim of this study was to assess to what extent simultaneously obtained measures of listening effort (task-evoked pupil dilation, verbal response time [RT], and self-rating) are sensitive to auditory and cognitive manipulations in a speech perception task, and to explore the possible relationship between RT and pupil dilation. DESIGN: A within-group design was adopted. All participants were administered the Matrix Sentence Test in 12 conditions (signal-to-noise ratios [SNR] of -3, -6, and -9 dB; attentional resources focussed vs. divided; spatial priors present vs. absent). STUDY SAMPLE: Twenty-four normal-hearing adults, 20-41 years old (M = 23.5), were recruited for the study. RESULTS: A significant effect of SNR was found for all measures; however, pupil dilation discriminated only partially between the SNRs. Neither of the cognitive manipulations was effective in modulating the measures. No relationship emerged between pupil dilation, RT, and self-ratings. CONCLUSIONS: RT, pupil dilation, and self-ratings can be obtained simultaneously when administering speech perception tasks, even though some limitations remain, related to the absence of a retention period after the listening phase. The three measures differ in their sensitivity to changes in the auditory environment: RTs and self-ratings proved most sensitive to changes in SNR.


Subjects
Pupil, Speech Perception, Adult, Auditory Perception, Humans, Listening Effort, Pupil/physiology, Reaction Time, Speech Perception/physiology, Young Adult
20.
Ear Hear; 43(1): 192-205, 2022.
Article in English | MEDLINE | ID: mdl-34225320

ABSTRACT

OBJECTIVES: The aim of this study was to assess three-dimensional (3D) spatial hearing abilities in reaching space of children and adolescents fitted with bilateral cochlear implants (BCI), and to investigate the impact of spontaneous head movements on sound localization abilities. DESIGN: BCI children (N = 18, aged between 8 and 17) and age-matched normal-hearing (NH) controls (N = 18) took part in the study. Tests were performed using immersive virtual reality equipment that allowed control over visual information and initial eye position, as well as real-time 3D motion tracking of head and hand position with subcentimeter accuracy. The experiment exploited these technical features to achieve trial-by-trial exact positioning, in head-centered coordinates, of a single loudspeaker used for real, near-field sound delivery, reproducible across trials and participants. Using this novel approach, broadband sounds were delivered at different azimuths within the participants' arm length, in front and back space, at two different distances from their heads. Continuous head monitoring allowed us to compare two listening conditions: "head immobile" (no head movements allowed) and "head moving" (spontaneous head movements allowed). Sound localization performance was assessed by computing the mean 3D error (i.e., the difference in space between the X-Y-Z position of the loudspeaker and the participant's final hand position used to indicate the localization of the sound's source), the percentage of front-back and left-right confusions in azimuth, and the discriminability between two nearby distances. Several clinical factors (i.e., age at test, interimplant interval, and duration of binaural experience) were also correlated with the mean 3D error. Finally, the Speech Spatial and Qualities of Hearing Scale was administered to BCI participants and their parents. RESULTS: Although BCI participants distinguished well between left and right sound sources, near-field spatial hearing remained challenging, particularly under the "head immobile" condition. Without visual priors of the sound position, response accuracy was lower than that of their NH peers, as evidenced by the mean 3D error (BCI: 55 cm, NH: 24 cm, p = 0.008). The BCI group mainly pointed along the interaural axis, corresponding to the position of their CI microphones, which led to substantial front-back confusions (44.6%). Distance discrimination also remained challenging for BCI users, mostly due to the sound compression applied by their processors. Notably, BCI users benefitted from head movements under the "head moving" condition, with a significant decrease of the 3D error when pointing to front targets (p < 0.001). Interimplant interval was correlated with the 3D error (p < 0.001), whereas no correlation with self-assessment of spatial hearing difficulties emerged (p = 0.9). CONCLUSIONS: In reaching space, BCI children and adolescents are able to extract enough auditory cues to discriminate sound side. However, without visual cues or spontaneous head movements during sound emission, their localization abilities are substantially impaired for front-back and distance discrimination. Exploring the environment with head movements was a valuable strategy for improving sound localization in individuals with different clinical backgrounds. These novel findings could prompt new perspectives to better understand sound localization maturation in BCI children, and more broadly in patients with hearing loss.
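
The mean 3D error described above is the Euclidean distance between the loudspeaker's X-Y-Z position and the final hand position, averaged over trials; front-back confusions can be counted as responses falling on the opposite side of the interaural plane from the target. A minimal sketch under those assumptions (axis convention and function names are illustrative):

    import numpy as np

    def mean_3d_error(targets_xyz, responses_xyz):
        # Mean Euclidean distance (same unit as the inputs, e.g. cm)
        # between loudspeaker and final hand positions, trial by trial.
        return float(np.mean(np.linalg.norm(targets_xyz - responses_xyz, axis=1)))

    def front_back_confusion_rate(targets_xyz, responses_xyz, front_axis=1):
        # Trials where response and target lie on opposite sides of the
        # interaural (left-right) plane; assumes axis 1 points forward.
        opposite = np.sign(targets_xyz[:, front_axis]) != np.sign(responses_xyz[:, front_axis])
        return float(np.mean(opposite))

    targets = np.array([[10.0, 40.0, 0.0], [-20.0, -30.0, 5.0]])   # cm, head-centred
    responses = np.array([[5.0, 35.0, 2.0], [-15.0, 25.0, 0.0]])
    print(mean_3d_error(targets, responses))              # ~31.4 cm
    print(front_back_confusion_rate(targets, responses))  # 0.5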


Assuntos
Implante Coclear , Implantes Cocleares , Perda Auditiva , Localização de Som , Percepção da Fala , Adolescente , Criança , Implante Coclear/métodos , Movimentos da Cabeça , Audição , Humanos